% perm filename PROPOS[S89,JMC], filedate 1989-05-18
%propos[s89,jmc] Proposal to DARPA for supplementary contract
\centerline{PROPOSAL FOR RESEARCH IN THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE}
\bigskip
\bigskip
This is a proposal for support of research by John McCarthy
and Vladimir Lifschitz in the foundations of artificial
intelligence. The goal is to lay the scientific foundations for a
database of common sense knowledge that could be used by any AI
system. The cost will be xxx over a three year period starting in
September 1989; the funds will support work by McCarthy and
Lifschitz and
two graduate students
%one graduate student
and will pay for the acquisition of two SUN-3 workstations.

The largest part of McCarthy's work since 1956 has been
in the foundations of artificial intelligence, although he has
also worked in developing the concept of time-sharing, in
symbolic computation (LISP) and in the mathematical theory of
computation (proving that computer programs meet their
specifications). Within artificial intelligence, most of his
work has been in the logical formalization of common sense
knowledge and reasoning, although he has worked on heuristic
programming in the past and plans to develop some ideas in this
area in the coming period.

McCarthy developed the situation calculus for expressing
the effects of action and the circumscription method of nonmonotonic
reasoning. Circumscription has been taken up by many competent
researchers, so McCarthy proposes to concentrate on the common
sense knowledge rather than on further development of the
nonmonotonic formalisms. Specifically, McCarthy plans work on
the following problems in the three year period of the proposal.

1. Developing formalisms that can be used to express
facts about the common sense world in a way suitable for
inclusion in a database of knowledge of the world that can
be used by whatever AI systems need to know about the world.

2. Formalizing contexts as objects in a theory. This
is needed to express context dependent facts in a way that
can be used by AI systems.

3. Generalizing the situation calculus to cover concurrent
events.

4. Formalizing plans in a way that will permit their
ready modification to deal with unanticipated obstacles. A
start on this, still unpublished, is included as Appendix B.

5. Determining how to include universal generalization
in the reasoning done by AI programs. The problem of the
sterile container, discussed in Appendix A, is an example
of human reasoning that uses it. To date, AI programs do not
perform universal generalization.

Appendix A,
{\it Artificial Intelligence, Logic and Formalizing Common Sense}
describes the present state of work in the direction of this
proposal. It will be included in a book edited by Richmond
Thomason about artificial intelligence and philosophical
logic. One of the goals of the paper is to encourage
philosophical logicians to develop formalisms useful
for AI.
\vfill\eject
\noindent Appendix B---Overcoming Unexpected Obstacles

As described in Appendix A, there is a major requirement
for successful behavior in the common sense world that isn't met
by any of the formalisms that have been developed in operations
research and other disciplines for optimizing behavior.
Operations Research formalisms are developed by having the
researcher initially delimit the phenomena that are to be taken
into account. If a fact not taken into account becomes relevant,
it is necessary for a person to revise the formalism. If
computers are to behave intelligently in the common sense world,
they must be able to take into account facts not previously
regarded as relevant.

The following set of situation calculus axioms
gives a simple example of a situation in which a goal can
be achieved by a certain strategy if no unexpected events occur.
Some additional facts are given that allow for an unexpected
event to occur, presenting an obstacle. Still other facts allow
this obstacle to be overcome. No revision of the facts is
involved, only the taking of new facts into account. Therefore, no
human intervention is needed to recognize and overcome the
unexpected obstacle.

The original reasoning that
the goal can be achieved is defeasible, because it is
partly nonmonotonic. So is the final conclusion that the goal
can be achieved by overcoming the obstacle, because there is
no proof that there can't be additional obstacles. Of course,
that is what common sense situations are like.

The situation calculus is introduced in (McCarthy and
Hayes, 1969), and the nonmonotonic formalism used in (McCarthy 1986).
The particular way of handling actions nonmonotonically is adapted
from (Lifschitz 1987), reprinted in (Ginsberg 1987).
The Lifschitz paper is the best starting point for understanding
the formulas that follow, so it is included as Appendix C.
All references are included in Appendix A.

We use $holds(p,s)$ to assert that the proposition $p$
holds in situation $s$. $precond(p,a)$ asserts that $p$ holding
is a precondition for the successful performance of action $a$.
Also we use $r$ for the function $result$ of previous papers in
order to shorten the formulas.
We take from Lifschitz the definition
%
$$(∀a s)(succeeds(a,s) ≡ (∀p)(precond(p,a) ⊃ holds(p,s)))$$
%
and the axioms
%
$$(∀a p s)(succeeds(a,s) ∧ causes(a,p) ⊃ holds(p,r(a,s))),$$
%
and
%
$$(∀a p s)(¬noninertial(p,a) ∧ holds(p,s) ⊃ holds(p,r(a,s))).$$
%

In order to deal with events that are not actions we add to the
Lifschitz formalism the predicate $occurs(e,s)$ meaning that the
event $e$ occurs in the situation $s$. We extend the
function $r$ to apply to events as well as other actions, so
that $r(e,s)$ denotes the situation that results when event $e$
occurs in situation $s$. We imagine that a sequence of events
occurs after an action, where this sequence may be empty. We
have the axioms
%
$$(∀s)((∀e)(¬occurs(e,s)) ⊃ outcome(s) = s),$$
%
which asserts that when no events occur the situation remains
unchanged. We also have
%
$$(∀e s)(occurs(e,s) ⊃ outcome(s) = outcome(r(e,s))),$$
%
and we make the definition
%
$$(∀a s)(rr(a,s) = outcome(r(a,s))).$$

These facts are about the achievement of goals in general;
associated with them is a {\it circumscription policy} telling
how the nonmonotonic reasoning is to be done. In this case,
it is to circumscribe the predicates $precond$, $causes$, $noninertial$
and $occurs$. Thus actions have only such preconditions as will
follow from the facts given and only those events occur that
are specified, etc.
\bigskip
\noindent An Example

We now come to the particular task---flying from Glasgow to
Moscow via London. We assume that for flying the actor must
have a ticket and the flights must exist. The exceptional
possibility is that the ticket is lost in London, and the
solution is to buy another ticket. That's all there is, but
it carries the essence of the problem of dealing with
unexpected obstacles. We have the axioms:
$$causes(fly(l1,l2),at(l2))$$
%
$$precond(at(l1),fly(l1,l2))$$
%
$$precond(hasticket,fly(l1,l2))$$
%
$$causes(buyticket,hasticket)$$
%
$$precond(flight(l1,l2),fly(l1,l2))$$
$$causes(loseticket, not\,hasticket)$$
$$holds(not\, p,s) ≡ ¬holds(p,s)$$
$$holds(flight(Glasgow,London),S0)$$
%
$$holds(flight(London,Moscow),S0)$$
%
$$holds(hasticket,S0)$$
We can show
%
$$holds(at(Moscow),rr(fly(London,Moscow),rr(fly(Glasgow,London),S0)))$$
%
with the above axioms. However, if we add to these axioms
%
$$occurs(loseticket,r(fly(Glasgow,London),S0)),$$
%
then we can no longer show that the original strategy works. But we
can show
%
$$holds(at(Moscow),rr(fly(London,Moscow),
rr(buyticket,rr(fly(Glasgow,London),S0)))).$$
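
To trace the recovery through the definitions: circumscribing $occurs$,
the only event is the lost ticket, so
%
$$rr(fly(Glasgow,London),S0) = outcome(r(fly(Glasgow,London),S0))
= r(loseticket,r(fly(Glasgow,London),S0)),$$
%
a situation in which $hasticket$ does not hold, so that
$fly(London,Moscow)$ no longer succeeds. The action $buyticket$ has
no preconditions (none follow from the circumscription of $precond$)
and causes $hasticket$, after which the second flight succeeds as
before.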
%

We are presuming that the facts about the possibility of buying
a new ticket are in the database of common sense facts. However, these
facts are not taken into account in the original reasoning. They need
only be invoked when relevant. This corresponds to the way people
deal with obstacles and the way computers will have to do so.
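
The reasoning in this example can also be carried out mechanically.
The following Python sketch is one way to render the axioms, with a
closed-world reading standing in for circumscription: the
preconditions, effects and occurring events are taken to be exactly
those listed. The dictionaries, the representation of a situation by
its history, and all the Python names are implementation choices, not
part of the formalism.

```python
# A minimal executable sketch of the travel example, assuming a
# closed-world reading in place of circumscription.  A situation is
# represented by its history: the tuple of actions and events that
# produced it from S0.  All names here are illustrative.

FLY_GL = ('fly', 'Glasgow', 'London')
FLY_LM = ('fly', 'London', 'Moscow')

# precond(p, a): propositions p that must hold for action a to succeed
PRECOND = {
    FLY_GL: {('at', 'Glasgow'), 'hasticket', ('flight', 'Glasgow', 'London')},
    FLY_LM: {('at', 'London'), 'hasticket', ('flight', 'London', 'Moscow')},
}
# causes(a, p): effects of actions and events; ('not', p) makes p false
CAUSES = {
    FLY_GL: {('at', 'London')},
    FLY_LM: {('at', 'Moscow')},
    'buyticket': {'hasticket'},
    'loseticket': {('not', 'hasticket')},
}
INITIAL = {('at', 'Glasgow'), 'hasticket',
           ('flight', 'Glasgow', 'London'), ('flight', 'London', 'Moscow')}
S0 = ()                                  # the initial situation: empty history

# occurs(loseticket, r(fly(Glasgow,London), S0))
OCCURS = {(FLY_GL,): 'loseticket'}

def succeeds(a, s):
    # succeeds(a,s) iff every precondition of a holds in s
    return all(holds(p, s) for p in PRECOND.get(a, set()))

def holds(p, s):
    # holds(not p, s) === not holds(p, s)
    if isinstance(p, tuple) and p[0] == 'not':
        return not holds(p[1], s)
    if not s:                            # base case: the initial situation
        return p in INITIAL
    prev, a = s[:-1], s[-1]              # s = r(a, prev)
    if succeeds(a, prev):                # events have no preconditions
        if p in CAUSES.get(a, set()):
            return True                  # positive effect of a
        if ('not', p) in CAUSES.get(a, set()):
            return False                 # negative effect of a
    return holds(p, prev)                # inertia: p persists

def r(a, s):
    return s + (a,)                      # result of a extends the history

def outcome(s):
    while s in OCCURS:                   # chase the occurring events, if any
        s = r(OCCURS[s], s)
    return s

def rr(a, s):
    return outcome(r(a, s))

# The original plan fails once the ticket is lost in London:
s1 = rr(FLY_GL, S0)
print(holds(('at', 'Moscow'), rr(FLY_LM, s1)))                    # False
# Buying a new ticket recovers it:
print(holds(('at', 'Moscow'), rr(FLY_LM, rr('buyticket', s1))))   # True
```

Deleting the entry in OCCURS restores the original, obstacle-free
situation, in which the two-flight plan succeeds unaided.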

Exactly the same treatment will work when an assumption is introduced that
the traveller breaks his leg. The general facts about getting medical
treatment and acquiring a wheelchair are presumably in the database.
They need only be triggered by the statement that the traveller breaks
his leg in London.

Actually, providing for breaking a leg gives rise to a more complex
epistemological problem. Namely, travellers don't really know
how medical treatment and wheelchairs are obtained. They only know
that the relevant information can be obtained by appealing to the
airline or airport officials.
\vfill\end